Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning

Neural Information Processing Systems

Off-policy evaluation of sequential decision policies from observational data is necessary in applications of batch reinforcement learning such as education and healthcare. In such settings, however, unobserved variables confound observed actions, rendering exact evaluation of new policies impossible, i.e., unidentifiable. We develop a robust approach that estimates sharp bounds on the (unidentifiable) value of a given policy in an infinite-horizon problem, given data from another policy with unobserved confounding, subject to a sensitivity model. We consider stationary unobserved confounding and compute bounds by optimizing over the set of all stationary state-occupancy ratios that agree with a new partially identified estimating equation and the sensitivity model. We prove convergence to the sharp bounds as we collect more confounded data. Although checking set membership is a linear program, the support function is given by a difficult nonconvex optimization problem. We develop approximations based on nonconvex projected gradient descent and demonstrate the resulting bounds empirically.
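To make the abstract's recipe concrete, here is a minimal sketch (not the authors' implementation) of how one might approximate such a bound with projected gradient ascent: a box on the weights stands in for the sensitivity model, a quadratic penalty stands in for the partially identified estimating equation, and a self-normalized objective makes the problem nonconvex. All quantities (`r`, `w_lo`, `w_hi`, `A`, `b`) are synthetic placeholders, not estimators from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic placeholders for quantities that would be estimated from the
# confounded behavior data: per-state reward estimates, a sensitivity-model
# box on the state-occupancy ratios, and a linear system standing in for
# the partially identified estimating equation.
n = 10
r = rng.uniform(size=n)                     # reward estimate per state
w_lo, w_hi = 0.5 * np.ones(n), 2.0 * np.ones(n)
A = rng.normal(size=(n, n))
b = A @ np.ones(n)                          # w = 1 is feasible by construction


def value_bound(sign, step=1e-3, lam=1.0, iters=20_000):
    """Projected gradient ascent on sign * (r.w / sum(w)) - lam * ||A w - b||^2.

    The self-normalized ratio makes the objective nonconvex in w; projection
    is just clipping to the sensitivity box. sign=+1 approximates the upper
    bound on the policy value, sign=-1 the lower bound.
    """
    w = np.ones(n)                          # start from the unweighted point
    for _ in range(iters):
        s = w.sum()
        grad_ratio = sign * (r / s - (r @ w) / s**2)   # d/dw of r.w / sum(w)
        grad_penalty = -2.0 * lam * A.T @ (A @ w - b)  # pull toward A w = b
        w = np.clip(w + step * (grad_ratio + grad_penalty), w_lo, w_hi)
    return r @ w / w.sum()


print("lower bound:", value_bound(sign=-1.0))
print("upper bound:", value_bound(sign=+1.0))
```

Because the objective is nonconvex, a real procedure would need restarts or other safeguards against local optima; this sketch only illustrates the step-then-project loop the abstract refers to.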


Review for NeurIPS paper: Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning

Neural Information Processing Systems

Additional Feedback: I really enjoyed this paper, so my comments mostly have to do with making the derivations a bit more readable. The main steps that I got hung up on while reading were the marginalization step, moving from weights beta to weights g, and the step where the matrix A(g) is defined. In both cases, I think some prose description of exactly what the transformation is would be helpful. For the weights g, I think the direct interpretation (the last expression in the line defining g_k(a j)) is more intuitive than the definition in terms of beta. It is not obvious how one moves from one to the other (especially with the inverse migrating out of the summation).
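One plausible reading of the step the reviewer flags, written with assumed notation (a conditioning bar in g_k(a | j), and beta as an inverse behavior propensity) rather than notation verified against the paper: if beta_k(a | j, u) = 1/pi_b(a | j, u), then marginalizing the unobserved confounder u averages the propensities rather than the weights, which is exactly how the inverse migrates outside the summation:

\[
g_k(a \mid j) \;=\; \Bigl(\sum_{u} P(u \mid s_k = j)\,\beta_k(a \mid j, u)^{-1}\Bigr)^{-1}
\;=\; \Bigl(\sum_{u} P(u \mid s_k = j)\,\pi_b(a \mid j, u)\Bigr)^{-1}
\;=\; \frac{1}{P(a_k = a \mid s_k = j)},
\]

by the law of total probability. On this reading, the marginalized weight is simply the inverse of the observed (confounder-free) propensity, which matches the reviewer's point that the direct interpretation is the more intuitive one.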


Review for NeurIPS paper: Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning

Neural Information Processing Systems

Overall, the reviewers found the paper technically sound, novel, and significant. Personally, I find it quite exciting, since it is the first to consider the problem of partial identification in settings with an infinite horizon. My suggestion for improving the paper is to take the reviewers' issues and recommendations into account. Accordingly, my recommendation is "accept."

